
Review for NeurIPS paper: Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift

Neural Information Processing Systems

Moreover, it would be helpful to include a proof sketch for Theorems 3.1 and 3.4, so that readers can get a general idea of the main steps. Minor: Labelled (line 15) vs. Labeled (line 57): pick one spelling and use it consistently throughout the text.


Review for NeurIPS paper: Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift

Neural Information Processing Systems

This paper proposes a new approach to unsupervised domain adaptation (UDA) under label shift. The idea is a generalized label shift (GLS) assumption, where conditional invariance is imposed in representation space rather than input space. The main contributions include 1) generalizing the information-theoretic lower bound on the error to multiple classes; 2) devising generalization bounds for the target domain based on the balanced error rate and the conditional error gap; 3) deriving necessary and sufficient conditions for GLS; 4) an efficient importance-reweighting algorithm for estimating the target/source label distributions using an integral probability metric. Overall, all reviewers, including myself, find the GLS framework interesting: it provides an important new approach to UDA that can be flexibly embedded in existing methods. The theoretical foundation is also solid.


Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift

Neural Information Processing Systems

Adversarial learning has demonstrated good performance in the unsupervised domain adaptation setting by learning domain-invariant representations. However, recent work has shown limitations of this approach when the label distributions differ between the source and target domains. In this paper, we propose a new assumption, \textit{generalized label shift} (\glsa), to improve robustness against mismatched label distributions. Under \glsa, we provide theoretical guarantees on the transfer performance of any classifier. We also devise necessary and sufficient conditions for \glsa to hold, using an estimate of the relative class weights between domains and an appropriate reweighting of samples.
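The class-weight estimation and sample reweighting the abstract alludes to can be sketched as follows. This is a minimal illustration only, assuming a confusion-matrix (black-box shift) style estimator with made-up toy numbers; it is not the paper's exact algorithm, which relies on an integral probability metric for conditional distribution matching.

```python
import numpy as np

def estimate_class_weights(confusion, target_pred_dist):
    """Estimate w[y] = p_T(y) / p_S(y) by solving C w = mu_T,
    where C[i, j] = p_S(yhat = i, y = j) is the source joint
    confusion matrix and mu_T[i] = p_T(yhat = i) is the classifier's
    predicted label distribution on the (unlabeled) target domain."""
    w = np.linalg.solve(confusion, target_pred_dist)
    return np.clip(w, 0.0, None)  # relative class weights must be non-negative

# Toy example: two classes, a slightly noisy source classifier.
confusion = np.array([[0.45, 0.05],   # p_S(yhat=0, y=0), p_S(yhat=0, y=1)
                      [0.05, 0.45]])  # p_S(yhat=1, y=0), p_S(yhat=1, y=1)
mu_T = np.array([0.61, 0.39])         # predicted label distribution on target

w = estimate_class_weights(confusion, mu_T)

# Reweight source samples by their class weight, so the reweighted
# source label distribution matches the estimated target one.
source_labels = np.array([0, 0, 1, 1, 1])
sample_weights = w[source_labels]
```

The per-sample weights can then be plugged into any weighted training loss, which is what allows the reweighting to be combined with existing domain-adaptation methods.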